
What To Do About Deepfakes

Communications of the ACM

Synthetic media technologies are rapidly advancing, making it easier to generate nonveridical media that look and sound increasingly realistic. So-called "deepfakes" (owing to their reliance on deep learning) often present a person saying or doing something they have not said or done. The proliferation of deepfakes creates a new challenge to the trustworthiness of visual experience, and has already created negative consequences such as nonconsensual pornography [11], political disinformation [19], and financial fraud [3]. Deepfakes can harm viewers by deceiving or intimidating them, harm subjects by causing reputational damage, and harm society by undermining societal values such as trust in institutions [7]. What can be done to mitigate these harms?


Facebook uses Amazon EC2 to evaluate the Deepfake Detection Challenge

#artificialintelligence

In October 2019, AWS announced that it was working with Facebook, Microsoft, and the Partnership on AI on the first Deepfake Detection Challenge. Deepfake algorithms rely on the same underlying technology that has given us realistic animation effects in movies and video games. Unfortunately, those same algorithms have been used by bad actors to blur the distinction between reality and fiction. Deepfake videos use artificial intelligence to manipulate audio and video so that it appears as though someone did or said something they didn't. For more information about deepfake content, see The Partnership on AI Steering Committee on AI and Media Integrity.


Facebook announces the winner of its Deepfake Detection Challenge

Engadget

In September 2019, Facebook launched its Deepfake Detection Challenge (DFDC) -- a public contest to develop autonomous algorithmic detection systems to combat the emerging threat of deepfake videos. After nearly a year, the social media platform has announced the winners of the challenge, drawn from a pool of more than 2,000 global competitors. Deepfakes present a unique challenge to social media platforms: they can be produced with little more than a consumer-grade GPU and software downloaded from the internet. With these tools, individuals can quickly and easily create fraudulent video clips whose subjects appear to say or do things that they never actually did.


A Report on the Deepfake Detection Challenge - The Partnership on AI

#artificialintelligence

These insights and recommendations highlight the importance of coordination and collaboration among actors in the information ecosystem. Journalists, fact-checkers, policymakers, civil society organizations, and others outside of the largest technology companies who are dealing with the potential malicious use of synthetic media globally need increased access to useful technical detection tools and other resources for evaluating content. At the same time, these tools and resources need to be inaccessible to adversaries working to generate malicious synthetic content that evades detection. Overall, detection models and tools must be grounded in the real-world dynamics of synthetic media detection and an informed understanding of their impact and usefulness.


Fake Trump video? How to spot deepfakes on Facebook and YouTube ahead of the presidential election

USATODAY - Tech Top Stories

But, says Kambhampati, the rapid improvements in deepfake technology mean that we will soon have to rely on AI techniques to detect what the human eye cannot. "There is not a 100% foolproof way of identifying deepfakes, not even for AI researchers," Thomas says. "Detection is always going to be an arms race. As people develop more accurate detection algorithms, fakers will develop even more sophisticated frauds." There are non-technical ways to sniff out a deepfake, just as with other forms of disinformation. Ask yourself: who is the person publishing this information?


Facebook, Microsoft, and others launch Deepfake Detection Challenge

#artificialintelligence

Deepfakes, or media that take a person in an existing image, audio recording, or video and replace them with someone else's likeness using AI algorithms, are multiplying quickly. Amsterdam-based cybersecurity startup Deeptrace found 14,698 deepfake videos on the internet during its most recent tally in June and July, up from 7,964 last December -- an 84% increase within only seven months. That's troublesome not only because deepfakes might be used to sway public opinion during, say, an election, or to implicate someone in a crime they didn't commit, but because the technology has already generated pornographic material and swindled firms out of hundreds of millions of dollars. In an effort to fight deepfakes' spread, Facebook -- along with Amazon Web Services (AWS), Microsoft, the Partnership on AI, and academics from Cornell Tech; MIT; the University of Oxford; UC Berkeley; the University of Maryland, College Park; and the State University of New York at Albany -- is spearheading the Deepfake Detection Challenge, which was announced in September. It launches globally at the NeurIPS 2019 conference in Vancouver this week, with the goal of catalyzing research to ensure the development of open source detection tools.
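Contests of this kind typically score a detector by how confidently it separates real clips from fakes, commonly with a binary log-loss metric that heavily penalizes confident mistakes. A minimal sketch of such a metric (the function name, labels, and numbers below are illustrative, not the official DFDC scoring code):

```python
import math

def log_loss(y_true, y_pred, eps=1e-15):
    """Average binary cross-entropy over a set of clips.
    y_true: 1 = fake, 0 = real; y_pred: predicted probability of 'fake'."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        # Clip predictions away from exactly 0 or 1 to avoid infinite penalties
        p = min(max(p, eps), 1 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

labels = [1, 0, 1, 0]            # ground truth for four hypothetical clips
preds  = [0.9, 0.2, 0.6, 0.1]    # a detector's fake-probability outputs
print(round(log_loss(labels, preds), 4))  # → 0.2362
```

Under this metric, a detector that outputs 0.5 everywhere scores about 0.693 (ln 2), so any useful model must do meaningfully better than that baseline.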


In the battle against deepfakes, AI is being pitted against AI

#artificialintelligence

Lying has never looked so good, literally. Concern over increasingly sophisticated technology able to create convincingly faked videos and audio, so-called 'deepfakes', is rising around the world. But even as these tools are developed, technologists are fighting back against the falsehoods. "The concern is that there will be a growing movement globally to undermine the quality of the information sphere and undermine the quality of discourse necessary in a democracy," Eileen Donahoe, a member of the Transatlantic Commission on Election Integrity, told CNBC in December 2018. She said deepfakes are potentially the next generation of disinformation.


The Partnership on AI Steering Committee on AI and Media Integrity - The Partnership on AI

#artificialintelligence

Advances in AI and computer graphics over the last several years are now being harnessed to create, modify, and disseminate modified or fabricated images, audio, and video content, often referred to broadly as synthetic media. These new content generation and modification capabilities have significant, global implications for the legitimacy of information online, the quality of public discourse, the safeguarding of human rights and civil liberties, and the health of democratic institutions--especially given that some of these techniques may be used maliciously as a source of misinformation, manipulation, harassment, and persuasion. The ability to create synthetic or manipulated content that is difficult to discern from real events frames the urgent need for developing new capabilities for detecting such content, and for authenticating trusted media and news sources. AI techniques are being developed to detect and defend against synthetic and modified content. However, further investment and collaboration will be required for the advancement and application of these techniques, and for strengthening capacity in organizations and communities affected by these developments.


Facebook, Microsoft launch contest to detect deepfake videos - Reuters

#artificialintelligence

The social media giant is putting $10 million into the "Deepfake Detection Challenge," which aims to spur detection research. As part of the project, Facebook is commissioning researchers to produce realistic deepfakes to create a data set for testing detection tools. The company said the videos, which will be released in December, will feature paid actors and that no user data would be utilized. In the run-up to the U.S. presidential election in November 2020, social platforms have been under pressure to tackle the threat of deepfakes, which use artificial intelligence to create hyper-realistic videos where a person appears to say or do something they did not. While there has not been a well-crafted deepfake video with major political consequences in the United States, the potential for manipulated video to cause turmoil was recently demonstrated by a "cheapfake" clip of House Speaker Nancy Pelosi, manually slowed down to make her speech seem slurred.
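The Pelosi clip illustrates that a "cheapfake" needs no AI at all: simply retiming a video's frames slows playback and distorts speech. A toy sketch of that retiming (the function and values are illustrative only):

```python
def slow_down(timestamps_s, speed=0.75):
    """Stretch frame presentation times for slowed playback -- the
    essence of a 'cheapfake': no machine learning, just retiming."""
    return [t / speed for t in timestamps_s]

# Three frames of a 30 fps clip
frames = [0.0, 1 / 30, 2 / 30]
slowed = slow_down(frames, speed=0.75)
# Frame interval grows from ~33.3 ms to ~44.4 ms, slurring the audio track
print(round(slowed[1] - slowed[0], 4))  # → 0.0444
```

Because such edits leave each frame untouched, frame-level forensic detectors tuned for AI-generated content can miss them entirely, which is why they are often treated as a separate category from deepfakes.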


Facebook, Microsoft and the Partnership on AI Form the Deepfake Detection Challenge

#artificialintelligence

Facebook chief technology officer Mike Schroepfer said in a blog post this week that the social network is taking steps to combat deepfakes, with some highly qualified assistance. Facebook is teaming up with Microsoft, the Partnership on Artificial Intelligence to Benefit People and Society, and academics from Cornell Tech, the Massachusetts Institute of Technology, the University of Oxford, the University of California, Berkeley, the University of Maryland, College Park, and the University at Albany-State University of New York on the Deepfake Detection Challenge. The goal of the Deepfake Detection Challenge is to develop technology to better detect when AI has been used to alter videos and mislead viewers. Schroepfer said the Deepfake Detection Challenge will include a realistic data set, featuring paid actors, adding that user data from the social network will not be used. Facebook is dedicating over $10 million to fund the industrywide effort, including research collaboration and prizes for the challenge.